151 research outputs found

    Ontological Matchmaking in Recommender Systems

    The electronic marketplace offers great potential for the recommendation of supplies. In so-called recommender systems, it is crucial to apply matchmaking strategies that faithfully satisfy the predicates specified in the demand and take user preferences into account as much as possible. We focus on real-life ontology-driven matchmaking scenarios and identify a number of challenges inspired by them. A key challenge is that of presenting the results to users in a clear and understandable fashion, so as to facilitate their analysis. Indeed, such scenarios call for ranking and grouping the results according to specific criteria. A further challenge consists in delivering results asynchronously, i.e. in 'push' mode, alongside the traditional 'pull' mode, in which the user explicitly issues a query and the system displays the results. Moreover, an important issue in real-life cases is the possibility of submitting a query to multiple providers and collecting the various results. We have designed and implemented an ontology-based matchmaking system that addresses the above challenges, and have conducted a comprehensive experimental study of its usability, its performance, and the effectiveness of its matchmaking strategies on real ontological datasets. (28 pages, 8 figures)
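
    To fix ideas on the kind of processing such a system performs, the sketch below ranks the offers that satisfy a demand by a naive preference score and groups them by provider. It is a minimal, hypothetical example: the paper's actual matchmaking strategies, data model, and push/pull machinery are not shown, and all names (Offer, rank_and_group, the scoring scheme) are invented.

```python
# Hypothetical sketch of ranking and grouping matchmaking results; the
# paper's actual strategies are not reproduced, and the scoring is invented.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    attributes: dict

def satisfies_demand(offer, demand):
    # Hard constraints: every predicate specified in the demand must hold.
    return all(offer.attributes.get(k) == v for k, v in demand.items())

def preference_score(offer, preferences):
    # Soft constraints: fraction of user preferences the offer satisfies.
    if not preferences:
        return 0.0
    hits = sum(offer.attributes.get(k) == v for k, v in preferences.items())
    return hits / len(preferences)

def rank_and_group(offers, demand, preferences):
    # Keep offers that satisfy the demand, rank them by preference score,
    # then group them by provider for a clear-cut presentation.
    ranked = sorted(
        (o for o in offers if satisfies_demand(o, demand)),
        key=lambda o: preference_score(o, preferences),
        reverse=True,
    )
    groups = {}
    for o in ranked:
        groups.setdefault(o.provider, []).append(o)
    return groups

offers = [
    Offer("acme", {"type": "steel", "grade": "A"}),
    Offer("acme", {"type": "steel", "grade": "B"}),
    Offer("bolt", {"type": "steel", "grade": "A"}),
]
print(rank_and_group(offers, {"type": "steel"}, {"grade": "A"}))
```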

    Cut and Paste

    The paper develops Editor, a language for manipulating semistructured documents, such as those typically available on the Web. Editor programs are based on two simple ideas, taken from text editors: “search” instructions are used to select regions of interest in a document, and “cut & paste” instructions to restructure them. We study the expressive power and the complexity of these programs. We show that they are computationally complete, in the sense that any computable document restructuring can be expressed in Editor. We also study the complexity of a safe subclass of programs, showing that it captures exactly the class of polynomial-time restructurings. The language has been implemented in Java and is currently used in the Araneus project as a basis for a wrapper-generation toolkit.
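
    The two ideas the abstract mentions can be made concrete with a toy interpreter. The sketch below is a minimal Python approximation, not Editor's actual syntax or semantics: search selects a region via a regular expression, cut removes it, and paste reinserts it elsewhere.

```python
# Toy illustration of "search" plus "cut & paste" restructuring; Editor's
# real language is not reproduced here, and this sketch is hypothetical.
import re

def search(document: str, pattern: str):
    """Select the first region of interest matching a pattern."""
    m = re.search(pattern, document)
    return m.span() if m else None

def cut(document: str, span):
    """Remove a region, returning the cut text and the remaining document."""
    start, end = span
    return document[start:end], document[:start] + document[end:]

def paste(document: str, text: str, position: int):
    """Insert previously cut text at a given position."""
    return document[:position] + text + document[position:]

# Example restructuring: move the <title> element to the front.
doc = "<body><title>Araneus</title><p>wrapper toolkit</p></body>"
span = search(doc, r"<title>.*?</title>")
fragment, rest = cut(doc, span)
print(paste(rest, fragment, 0))
```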

    On federated single sign-on in e-government interoperability frameworks

    We consider the problem of handling digital identities within service-oriented architectures (SOA). We explore federated, single sign-on (SSO) solutions based on identity managers and service providers. After an overview of the different standards and protocols, we introduce a middleware-based architecture to simplify the integration of legacy systems within such platforms. Our solution is based on a middleware module that decouples the legacy system from the identity-management modules. We consider both standard point-to-point service architectures and complex government interoperability frameworks, and report experiments showing that our solution provides clear advantages in terms of both effectiveness and performance.
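
    A minimal sketch of the decoupling idea follows, with invented names throughout: the middleware accepts a federated assertion (e.g. one extracted from a SAML-style SSO exchange) and maps it to the local account the legacy system already understands, so the legacy code never needs to speak SSO protocols directly.

```python
# Hypothetical sketch of the middleware module described above; all class
# and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class SSOAssertion:
    subject: str      # federated identity, e.g. from an identity provider
    issuer: str       # the identity provider that authenticated the user
    attributes: dict

class LegacySystem:
    def open_session(self, username: str) -> str:
        # The legacy application only knows local usernames.
        return f"session-for-{username}"

class IdentityMiddleware:
    """Maps federated identities onto local accounts so the legacy
    system is decoupled from the identity-management modules."""
    def __init__(self, legacy: LegacySystem, account_map: dict):
        self.legacy = legacy
        self.account_map = account_map  # federated subject -> local user

    def login(self, assertion: SSOAssertion) -> str:
        local_user = self.account_map.get(assertion.subject)
        if local_user is None:
            raise PermissionError(f"no local account for {assertion.subject}")
        return self.legacy.open_session(local_user)

middleware = IdentityMiddleware(LegacySystem(), {"alice@idp.gov": "alice"})
print(middleware.login(SSOAssertion("alice@idp.gov", "idp.gov", {})))
```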

    Diogene-CT: tools and methodologies for teaching and learning coding

    Computational thinking is the capacity to undertake a problem-solving process in various disciplines (including STEM, i.e. science, technology, engineering and mathematics) using distinctive techniques that are typical of computer science. It is nowadays considered a fundamental skill for students and citizens, one that has the potential to affect future generations. At the root of computational-thinking abilities stands the knowledge of computer programming, i.e. coding. With the goal of fostering computational thinking in young students, we address the challenging and open problem of using methods, tools and techniques to support the teaching and learning of computer-programming skills in secondary-school curricula and university courses. This problem is made complex by several factors. In fact, coding requires abstraction capabilities and complex cognitive skills, such as procedural and conditional reasoning, planning, and analogical reasoning. In this paper, we introduce a new paradigm called ACME ("Code Animation by Evolved Metaphors") that stands at the foundation of the Diogene-CT code visualization environment and methodology. We develop consistent visual metaphors for both procedural and object-oriented programming. Based on these metaphors, we introduce a playground architecture to support the teaching and learning of the principles of coding. To the best of our knowledge, this is the first scalable code visualization tool using consistent metaphors in the field of Computing Education Research (CER). It may be considered a new kind of tool, namely a code visualization environment.

    GROM: a general rewriter of semantic mappings

    We present GROM, a tool conceived to handle high-level schema mappings between semantic descriptions of a source and a target database. GROM rewrites mappings between the virtual, view-based semantic schemas in terms of mappings between the two physical databases, and then executes them. The system serves the purpose of teaching two main lessons. First, designing mappings among higher-level descriptions is often simpler than working with the original schemas. Second, as soon as the view-definition language becomes more expressive, to handle, for example, negation, the mapping problem becomes extremely challenging from the technical viewpoint, so that one needs to find a proper trade-off between expressiveness and scalability.
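
    A hypothetical sketch of the central rewriting step follows: a mapping stated over view-based semantic schemas is unfolded into one over the physical tables by replacing each view atom with the body of its (positive, conjunctive) definition. All names are invented and variable renaming is omitted for brevity; note that this simple unfolding is exactly what stops working once view definitions may contain negation.

```python
# Invented, simplified illustration of view unfolding; GROM's real mapping
# language is richer, and proper variable renaming is omitted here.

# Each semantic view name maps to the physical atoms in its defining body.
VIEW_DEFS = {
    "SemPerson": [("Employee", ["name", "dept", "city"])],
    "SemCompany": [("Firm", ["cname", "city"])],
}

def unfold(mapping_body):
    """Rewrite the atoms of a mapping body over the physical schema."""
    physical = []
    for relation, args in mapping_body:
        if relation in VIEW_DEFS:
            # Replace the view atom by its definition over physical tables.
            physical.extend(VIEW_DEFS[relation])
        else:
            physical.append((relation, args))  # already a physical atom
    return physical

# Source side of a semantic mapping: SemPerson(name) AND SemCompany(cname).
semantic_body = [("SemPerson", ["name"]), ("SemCompany", ["cname"])]
print(unfold(semantic_body))
```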

    Greg, ML – Machine Learning for Healthcare at a Scale

    This paper introduces the Greg, ML platform, a machine-learning engine and toolset conceived to generate automatic diagnostic suggestions based on patient profiles. Greg, ML departs from many other experiences in machine learning for healthcare in that it was designed to handle a large number of different diagnoses, on the order of hundreds. We discuss the architecture at the core of Greg, ML, designed to handle the complex challenges posed by this ambitious goal, and confirm its effectiveness with experimental results based on the working prototype we have developed. Finally, we discuss challenges and opportunities related to the use of this kind of tool in medicine, along with some important lessons learned while developing it. In this respect, we underline that Greg, ML should be conceived primarily as a support for expert doctors in their diagnostic decisions: it can hardly replace humans in their judgment.
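
    To fix ideas on the task, rather than on Greg, ML's actual architecture (which is not reproduced here), the sketch below trains a plain multi-class classifier over synthetic patient profiles and returns the top-k diagnoses as ranked suggestions. Everything, from the feature encoding to the model choice, is an invented placeholder.

```python
# Invented placeholder, not Greg, ML's architecture: a generic multi-class
# model that ranks candidate diagnoses for a patient profile. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients, n_features, n_diagnoses = 500, 20, 50

X = rng.normal(size=(n_patients, n_features))      # encoded patient profiles
y = rng.integers(0, n_diagnoses, size=n_patients)  # diagnosis labels

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

def suggest(profile, k=5):
    """Return the k most probable diagnoses with their scores: a support
    for the expert doctor, not a replacement for human judgment."""
    probs = model.predict_proba(profile.reshape(1, -1))[0]
    top = np.argsort(probs)[::-1][:k]
    return [(int(model.classes_[i]), round(float(probs[i]), 3)) for i in top]

print(suggest(rng.normal(size=n_features)))
```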

    What is the IQ of your data transformation system?

    Mapping and translating data across different representations is a crucial problem in information systems. Many formalisms and tools are currently used for this purpose, to the point that developers typically face a difficult question: “what is the right tool for my translation task?” In this paper, we introduce several techniques that contribute to answering this question: a fairly general definition of a data transformation system, a new and very efficient similarity measure to evaluate the outputs produced by such a system, and a metric to estimate user effort. Based on these techniques, we are able to compare a wide range of systems on many translation tasks, gaining interesting insights about their effectiveness and, ultimately, about their “intelligence”.
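
    The paper's own similarity measure is not reproduced here; as a stand-in for the idea of scoring a system's output against the expected result, the sketch below computes a simple F-measure over output tuples. A tuple-level F-measure is a common baseline, not the measure the paper introduces.

```python
# Hedged baseline, not the paper's measure: precision/recall/F-measure
# over the set of tuples produced by a transformation system.

def f_measure(produced: set, expected: set) -> float:
    """Harmonic mean of precision and recall over output tuples."""
    if not produced or not expected:
        return 0.0
    tp = len(produced & expected)          # tuples that match the expected output
    precision = tp / len(produced)
    recall = tp / len(expected)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

expected = {("alice", "rome"), ("bob", "paris")}
produced = {("alice", "rome"), ("bob", "london")}
print(round(f_measure(produced, expected), 3))  # 0.5
```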

    Database challenges for exploratory computing

    Helping users make sense of very large datasets is nowadays considered an important research topic. However, the tools available for data analysis typically address professional data scientists who, besides a deep knowledge of the domain of interest, master one or more of the following disciplines: mathematics, statistics, computer science, computer engineering, and programming. On the contrary, in our vision it is vital to also support different kinds of users who, for various reasons, may want to analyze the data and obtain new insights from them. Examples of these data enthusiasts [4, 9] are journalists, investors, or politicians: non-technical users who can draw great advantage from exploring the data and achieving new, essential knowledge, instead of reading query results with tons of records.

    The term data exploration generally refers to a data user being able to find her way through large amounts of data in order to gather the necessary information. A more technical definition comes from the field of statistics, introduced by Tukey [12]: in exploratory data analysis the researcher explores the data in many possible ways, including the use of graphical tools like boxplots or histograms, gaining knowledge from the way the data are displayed. Despite the emphasis on visualization, exploratory data analysis still assumes that the user understands at least the basics of statistics, while in this paper we propose a paradigm for database exploration that is in turn inspired by the exploratory computing vision [2].

    We may describe exploratory computing as the step-by-step “conversation” of a user and a system that “help each other” to refine the data exploration process, ultimately gathering new knowledge that concretely fulfils the user's needs. The process is seen as a conversation because the system provides active support: it not only answers the user's requests, but also suggests one or more possible actions that may help the user focus the exploratory session. This activity may entail a wide range of different techniques, including statistics and data analysis, query suggestion, advanced visualization tools, etc. The closest analogy [2] is that of a human-to-human dialogue, in which two people talk and continuously make reference to their lives, priorities, knowledge and beliefs, leveraging them in order to provide the best possible contribution to the dialogue. In essence, through the conversation they are exploring themselves as well as the information that is conveyed through their words. This exploration process therefore means investigation, exploration-seeking, comparison-making, and learning, all at once. It is most appropriate for big collections of semantically rich data, which typically hide precious knowledge behind their complexity.

    In this broad and innovative context, this paper intends to make a significant step forward: it proposes a model to concretely perform this kind of exploration over a database. The model is general enough to encompass most data models and query languages that have been proposed for data management in the last few years. At the same time, it is precise enough to provide a first formalization of the problem and to reason about the research challenges this new paradigm of interaction poses to database researchers.
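
    As a concrete, if deliberately naive, rendering of this conversation loop, the sketch below answers an explicit ('pull') request and then plays the system's active role by suggesting an unexplored attribute to look at next. The suggestion heuristic (drill into the most skewed column) is an invented placeholder, not the model proposed in the paper.

```python
# Minimal sketch of the exploratory "conversation" loop; the paper's model
# is not reproduced, and the suggestion strategy is an invented placeholder.
from collections import Counter

def answer(data, column):
    """Answer the user's explicit request: value distribution of a column."""
    return Counter(row[column] for row in data)

def suggest_next(data, columns, asked):
    """Active support: propose an unexplored column whose value
    distribution is most skewed, as a candidate next step."""
    def skew(col):
        counts = answer(data, col).values()
        return max(counts) / sum(counts)
    remaining = [c for c in columns if c not in asked]
    return max(remaining, key=skew) if remaining else None

data = [
    {"country": "IT", "sector": "energy"},
    {"country": "IT", "sector": "media"},
    {"country": "FR", "sector": "energy"},
]
print(answer(data, "country"))                # the user's "pull" request
print(suggest_next(data, ["country", "sector"], {"country"}))  # system's hint
```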